Tags: data science

10 bookmark(s), sorted by date (descending)

  1. "Talk to your data. Instantly analyze, visualize, and transform."

    Analyzia is a data analysis tool that lets users talk to their data: analyzing, visualizing, and transforming CSV files with AI-powered insights, without writing code. It features natural language queries, Google Gemini integration, professional visualizations, and interactive dashboards, with a conversational interface that remembers previous questions. The tool requires Python 3.11+ and a Google API key, and is built on Streamlit, LangChain, and various data visualization libraries.
  2. A simple explanation of the Pearson correlation coefficient, with worked examples (a short NumPy sketch appears after this list).
  3. This article details how to build a lightweight and efficient rules engine by recasting propositional logic as sparse algebra. It guides readers from theoretical foundations to a practical implementation, introducing state vectors and algebraic operations for logical inference (see the sparse-algebra sketch after this list).
  4. A step-by-step guide to catching real anomalies without drowning in false alerts.
  5. This tutorial compares Polars and pandas, covering syntax, performance, LazyFrames, conversions, and plotting, to help you choose the right library for your data analysis needs (a side-by-side snippet follows this list).
  6. This article explores how prompt engineering can be used to improve time-series analysis with Large Language Models (LLMs), covering core strategies, preprocessing, anomaly detection, and feature engineering. It provides practical prompts and examples for various tasks.
  7. The author discusses a shift in approach to clustering mixed data, advocating for starting with the simpler Gower distance metric before resorting to more complex embedding techniques like UMAP. They introduce 'Gower Express', an optimized and accelerated implementation of the Gower distance (a plain NumPy version of the metric is sketched after this list).
  8. This article details a hands-on approach to modeling rare events in time series data using Python. It covers data exploration, defining extreme events, fitting distributions (GEV, Weibull, Gumbel), and evaluating model fit with log-likelihood, AIC, and BIC. The example uses weather data and provides code snippets for implementation (a condensed scipy.stats version appears after this list).
  9. This article explores the impact of hyperparameters on random forests, both in terms of performance and visual representation. It compares the performance of a default random forest with tuned decision trees and examines the effects of hyperparameters like `n_estimators`, `max_depth`, and `ccp_alpha` using visualizations of individual trees, predictions, and errors (a minimal scikit-learn comparison is sketched after this list).
  10. Extracting structured information effectively and accurately from long unstructured text with LangExtract and LLMs. This article explores Google's LangExtract framework together with the open Gemma 3 model, demonstrating how to parse an insurance policy to surface details like exclusions.
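
For item 2, the bookmarked explanation is not reproduced here, but the coefficient itself is standard: the covariance of the two variables divided by the product of their standard deviations. A minimal NumPy sketch with made-up study-hours data (illustrative only, not the article's example):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation: covariance of x and y divided by the
    product of their standard deviations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum())

hours = [1, 2, 3, 4, 5]
scores = [52, 55, 61, 70, 74]
print(pearson_r(hours, scores))          # ~0.99, a strong positive association
print(np.corrcoef(hours, scores)[0, 1])  # cross-check against NumPy's built-in
```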
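For item 3, the article's exact encoding is not reproduced in this summary; the sketch below only illustrates the general idea under simple assumptions, with hypothetical facts and a single rule. Facts become a 0/1 state vector, rule antecedents become rows of a sparse matrix, and forward chaining reduces to repeated matrix-vector products:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Facts are indexed 0..n-1; a state vector marks which facts currently hold.
facts = ["raining", "have_umbrella", "stay_dry"]
state = np.array([1, 1, 0], dtype=np.int8)  # raining AND have_umbrella

# One rule per row: antecedents as a sparse 0/1 matrix, plus the fact each
# rule asserts when all of its antecedents are satisfied.
antecedents = csr_matrix(np.array([[1, 1, 0]], dtype=np.int8))  # raining & umbrella
consequents = np.array([2])                                     # -> stay_dry
required = np.asarray(antecedents.sum(axis=1)).ravel()          # antecedent counts

# Forward chaining as matrix-vector products: iterate until no rule adds a fact.
changed = True
while changed:
    fired = (antecedents @ state) >= required   # rules whose antecedents all hold
    new_state = state.copy()
    new_state[consequents[fired]] = 1
    changed = bool((new_state != state).any())
    state = new_state

print(dict(zip(facts, state)))  # {'raining': 1, 'have_umbrella': 1, 'stay_dry': 1}
```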
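For item 5, a small illustration of the kind of comparison the tutorial makes, using invented data rather than the tutorial's own examples: the same aggregation in pandas, eager Polars, and a Polars LazyFrame, plus the conversions between the two libraries.

```python
import pandas as pd
import polars as pl

data = {"city": ["Oslo", "Oslo", "Bergen"], "temp": [3.1, 4.2, 6.0]}

# pandas: eager by design
pdf = pd.DataFrame(data)
print(pdf.groupby("city", as_index=False)["temp"].mean())

# Polars: same aggregation with the eager API
pldf = pl.DataFrame(data)
print(pldf.group_by("city").agg(pl.col("temp").mean()))

# Polars LazyFrame: build a query plan, let the optimizer run, then collect
lazy = pl.LazyFrame(data).group_by("city").agg(pl.col("temp").mean())
print(lazy.collect())

# Converting between the two (requires pyarrow to be installed)
print(pl.from_pandas(pdf))
print(pldf.to_pandas())
```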
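For item 7, Gower Express has its own optimized API, which is not shown here; the sketch below is only a plain NumPy/pandas version of the standard Gower metric (range-normalized absolute differences for numeric columns, simple mismatch for everything else), with invented data:

```python
import numpy as np
import pandas as pd

def gower_distance(df: pd.DataFrame) -> np.ndarray:
    """Pairwise Gower distance for a mixed-type DataFrame."""
    n = len(df)
    dist = np.zeros((n, n))
    for col in df.columns:
        s = df[col]
        if pd.api.types.is_numeric_dtype(s):
            rng = s.max() - s.min()
            d = np.abs(s.values[:, None] - s.values[None, :]) / (rng if rng else 1)
        else:
            d = (s.values[:, None] != s.values[None, :]).astype(float)
        dist += d
    return dist / df.shape[1]          # average over features

df = pd.DataFrame({"age": [25, 40, 33],
                   "income": [30_000, 90_000, 55_000],
                   "segment": ["a", "b", "a"]})
D = gower_distance(df)
print(np.round(D, 3))  # can be fed to a clusterer that accepts precomputed distances
```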
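For item 8, a compressed sketch of the fit-and-compare loop the summary describes, using scipy.stats and a synthetic stand-in for the article's weather data; the distribution choices (GEV, Weibull, Gumbel) and the log-likelihood/AIC/BIC comparison follow the summary above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for the article's weather data: annual maxima of a daily series.
annual_max = rng.gumbel(loc=30, scale=4, size=60)

candidates = {
    "GEV": stats.genextreme,
    "Weibull": stats.weibull_min,
    "Gumbel": stats.gumbel_r,
}

for name, dist in candidates.items():
    params = dist.fit(annual_max)                    # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(annual_max, *params))
    k = len(params)                                  # number of fitted parameters
    aic = 2 * k - 2 * loglik
    bic = k * np.log(len(annual_max)) - 2 * loglik
    print(f"{name:8s} logL={loglik:8.2f}  AIC={aic:8.2f}  BIC={bic:8.2f}")
```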
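For item 9, a minimal scikit-learn sketch of comparing a default random forest against one with hand-picked values of `n_estimators`, `max_depth`, and `ccp_alpha`; the dataset and the "tuned" values here are placeholders, not the article's.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

configs = {
    "default": {},
    "tuned": {"n_estimators": 300, "max_depth": 8, "ccp_alpha": 0.01},  # illustrative values
}
for name, params in configs.items():
    rf = RandomForestRegressor(random_state=0, **params).fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, rf.predict(X_te))
    depths = [est.get_depth() for est in rf.estimators_]   # inspect individual trees
    print(f"{name:8s} test MSE={mse:8.1f}  mean tree depth={sum(depths)/len(depths):.1f}")
```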


